Exploring the similarity of medical imaging classification problems
Supervised learning is ubiquitous in medical image analysis. In this paper we
consider the problem of meta-learning -- predicting which methods will perform
well in an unseen classification problem, given previous experience with other
classification problems. We investigate the first step of such an approach: how
to quantify the similarity of different classification problems. We
characterize datasets sampled from six classification problems by performance
ranks of simple classifiers, and define the similarity by the inverse of
Euclidean distance in this meta-feature space. We visualize the similarities in
a 2D space, where meaningful clusters start to emerge, and show that the
proposed representation can be used to classify datasets according to their
origin with 89.3% accuracy. These findings, together with observations of
recent trends in machine learning, suggest that meta-learning could be a
valuable tool for the medical imaging community.
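As a rough illustration of the similarity measure this abstract describes, the sketch below compares two datasets represented by the performance ranks of the same set of simple classifiers, using the inverse of the Euclidean distance as the similarity. The function name and the rank vectors are hypothetical; only the inverse-distance definition comes from the abstract.

```python
import numpy as np

def dataset_similarity(ranks_a, ranks_b):
    """Similarity of two datasets, each represented by the performance ranks
    of the same simple classifiers (the meta-feature space). Defined, as in
    the abstract, as the inverse of the Euclidean distance between ranks."""
    ranks_a = np.asarray(ranks_a, dtype=float)
    ranks_b = np.asarray(ranks_b, dtype=float)
    distance = np.linalg.norm(ranks_a - ranks_b)
    return np.inf if distance == 0 else 1.0 / distance

# Hypothetical rank vectors (1 = best) for five classifiers on two datasets.
print(dataset_similarity([1, 2, 3, 4, 5], [2, 1, 3, 5, 4]))
```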
Domain-adversarial neural networks to address the appearance variability of histopathology images
Preparing and scanning histopathology slides consists of several steps, each
with a multitude of parameters. The parameters can vary between pathology labs
and within the same lab over time, resulting in significant variability of the
tissue appearance that hampers the generalization of automatic image analysis
methods. Typically, this is addressed with ad-hoc approaches such as staining
normalization that aim to reduce the appearance variability. In this paper, we
propose a systematic solution based on domain-adversarial neural networks. We
hypothesize that removing the domain information from the model representation
leads to better generalization. We tested our hypothesis for the problem of
mitosis detection in breast cancer histopathology images and made a comparative
analysis with two other approaches. We show that combining color augmentation
with domain-adversarial training is a better alternative than standard
approaches to improve the generalization of deep learning methods. Comment: MICCAI 2017 Workshop on Deep Learning in Medical Image Analysis
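The abstract does not spell out the adversarial mechanism; a common way to realize domain-adversarial training is the gradient reversal layer of Ganin and Lempitsky's DANN, sketched below in PyTorch. The class and function names are illustrative, not the authors' code.

```python
import torch

class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients in the
    backward pass. Placed between the feature extractor and a domain
    classifier, it pushes the features to become uninformative about the
    domain (e.g., the pathology lab), while the label branch is unaffected."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient flowing from the domain classifier.
        return -ctx.lam * grad_output, None

def gradient_reversal(x, lam=1.0):
    return GradientReversal.apply(x, lam)
```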
Inferring a Third Spatial Dimension from 2D Histological Images
Histological images are obtained by transmitting light through a tissue
specimen that has been stained in order to produce contrast. This process
results in 2D images of a specimen that has a three-dimensional structure. In
this paper, we propose a method to infer how the stains are distributed in the
direction perpendicular to the surface of the slide for a given 2D image in
order to obtain a 3D representation of the tissue. This inference is achieved
by decomposition of the staining concentration maps under constraints that
ensure realistic decomposition and reconstruction of the original 2D images.
Our study shows that it is possible to generate realistic 3D images making this
method a potential tool for data augmentation when training deep learning
models. Comment: IEEE International Symposium on Biomedical Imaging (ISBI), 201
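For context, staining concentration maps are usually obtained from an RGB image via the Beer-Lambert law and linear unmixing. The sketch below shows only that standard first step, not the paper's constrained decomposition into a third dimension; the H&E absorption vectors are the commonly used Ruifrok-Johnston values.

```python
import numpy as np

def stain_concentration_maps(rgb, stain_matrix):
    """Estimate per-pixel stain concentrations from an 8-bit RGB image.

    Beer-Lambert: optical density OD = -log(I / I0) is linear in the stain
    concentrations, OD = C @ S, with S the (n_stains, 3) matrix of stain
    absorption vectors. A least-squares solve recovers C per pixel.
    """
    od = -np.log(np.maximum(rgb.astype(float), 1.0) / 255.0)
    h, w, _ = od.shape
    c, *_ = np.linalg.lstsq(stain_matrix.T, od.reshape(-1, 3).T, rcond=None)
    return c.T.reshape(h, w, -1)

# Standard H&E absorption vectors (rows: hematoxylin, eosin).
he_stains = np.array([[0.65, 0.70, 0.29],
                      [0.07, 0.99, 0.11]])
# concentrations = stain_concentration_maps(rgb_image, he_stains)
```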
Roto-Translation Equivariant Convolutional Networks: Application to Histopathology Image Analysis
Rotation-invariance is a desired property of machine-learning models for
medical image analysis and in particular for computational pathology
applications. We propose a framework to encode the geometric structure of the
special Euclidean motion group SE(2) in convolutional networks to yield
translation and rotation equivariance via the introduction of SE(2)-group
convolution layers. This structure enables models to learn feature
representations with a discretized orientation dimension that guarantees that
their outputs are invariant under a discrete set of rotations. Conventional
approaches for rotation invariance rely mostly on data augmentation, but this
does not guarantee the robustness of the output when the input is rotated.
Moreover, trained conventional CNNs may require test-time rotation augmentation to
reach their full capability. This study is focused on histopathology image
analysis applications for which it is desirable that the arbitrary global
orientation information of the imaged tissues is not captured by the machine
learning models. The proposed framework is evaluated on three different
histopathology image analysis tasks (mitosis detection, nuclei segmentation and
tumor classification). We present a comparative analysis for each problem and
show that a consistent increase in performance can be achieved when using the
proposed framework.
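The core idea, correlating the input with rotated copies of each kernel so that features carry a discretized orientation axis, can be sketched in a few lines. This is a conceptual illustration of the lifting step only, under hypothetical names, not the authors' full SE(2) framework.

```python
import numpy as np
from scipy.ndimage import rotate
from scipy.signal import correlate2d

def se2_lifting(image, kernel, n_orientations=8):
    """Correlate a 2D image with rotated copies of one kernel.

    The output has shape (n_orientations, H, W): an extra, discretized
    orientation axis. Rotating the input by a multiple of 360/n_orientations
    degrees rolls this axis, so pooling over it (e.g., a max) yields a
    response that is invariant to that discrete set of rotations.
    """
    angles = [i * 360.0 / n_orientations for i in range(n_orientations)]
    return np.stack([
        correlate2d(image, rotate(kernel, a, reshape=False), mode="same")
        for a in angles
    ])
```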
Quantifying Graft Detachment after Descemet's Membrane Endothelial Keratoplasty with Deep Convolutional Neural Networks
Purpose: We developed a method to automatically locate and quantify graft
detachment after Descemet's Membrane Endothelial Keratoplasty (DMEK) in
Anterior Segment Optical Coherence Tomography (AS-OCT) scans. Methods: 1280
AS-OCT B-scans were annotated by a DMEK expert. Using the annotations, a deep
learning pipeline was developed to localize the scleral spur, center the AS-OCT
B-scans and segment the detached graft sections. Detachment segmentation model
performance was evaluated per B-scan by comparing (1) length of detachment and
(2) horizontal projection of the detached sections with the expert annotations.
Horizontal projections were used to construct graft detachment maps. All final
evaluations were done on a test set that was set apart during training of the
models. A second DMEK expert annotated the test set to determine inter-rater
performance. Results: Mean scleral spur localization error was 0.155 mm,
whereas the inter-rater difference was 0.090 mm. The estimated graft detachment
lengths were in 69% of the cases within a 10-pixel (~150 µm) difference from
the ground truth (77% for the second DMEK expert). Dice scores for the
horizontal projections of all B-scans with detachments were 0.896 and 0.880 for
our model and the second DMEK expert respectively. Conclusion: Our deep
learning model can be used to automatically and instantly localize graft
detachment in AS-OCT B-scans. Horizontal detachment projections can be
determined with the same accuracy as a human DMEK expert, allowing for the
construction of accurate graft detachment maps. Translational Relevance:
Automated localization and quantification of graft detachment can support DMEK
research and standardize clinical decision making. Comment: To be published in Translational Vision Science & Technology
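The per-B-scan evaluation described above, Dice overlap on horizontal projections of the detachment masks, can be made concrete with a short sketch. The function names are hypothetical; the projection and Dice definitions follow the abstract.

```python
import numpy as np

def horizontal_projection(mask):
    """Collapse a binary detachment segmentation (H, W) onto the width axis:
    a column counts as detached if any pixel in it is labeled detached."""
    return np.asarray(mask, dtype=bool).any(axis=0)

def dice_score(a, b):
    """Dice overlap between two binary 1D projections (model vs. expert)."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    total = a.sum() + b.sum()
    return 1.0 if total == 0 else 2.0 * (a & b).sum() / total
```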
A comprehensive multi-domain dataset for mitotic figure detection
The prognostic value of mitotic figures in tumor tissue is well-established for many tumor types, and automating this task is of high research interest. However, deep learning-based methods in particular face performance deterioration in the presence of domain shifts, which may arise from different tumor types, slide preparation and digitization devices. We introduce the MIDOG++ dataset, an extension of the MIDOG 2021 and 2022 challenge datasets. We provide region-of-interest images from 503 histological specimens of seven tumor types with variable morphology, labeled with a total of 11,937 mitotic figures: breast carcinoma, lung carcinoma, lymphosarcoma, neuroendocrine tumor, cutaneous mast cell tumor, cutaneous melanoma, and (sub)cutaneous soft tissue sarcoma. The specimens were processed in several laboratories using diverse scanners. We evaluated the extent of the domain shift using state-of-the-art approaches, observing notable performance differences under single-domain training. In a leave-one-domain-out setting, generalizability improved considerably. This mitotic figure dataset is the first to incorporate a wide domain shift based on different tumor types, laboratories, whole slide image scanners, and species.
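A leave-one-domain-out evaluation like the one described reduces to a simple split over domain labels. The record structure below is hypothetical, meant only to show the shape of such an experiment.

```python
def leave_one_domain_out(records, held_out_domain):
    """Split annotated cases so one domain (e.g., a tumor type or scanner)
    is excluded from training and used solely to test generalization."""
    train = [r for r in records if r["domain"] != held_out_domain]
    test = [r for r in records if r["domain"] == held_out_domain]
    return train, test

# Hypothetical usage: hold out one tumor type per fold.
# for domain in sorted({r["domain"] for r in dataset}):
#     train, test = leave_one_domain_out(dataset, domain)
```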
Corneal Pachymetry by AS-OCT after Descemet's Membrane Endothelial Keratoplasty
Corneal thickness (pachymetry) maps can be used to monitor restoration of
corneal endothelial function, for example after Descemet's membrane endothelial
keratoplasty (DMEK). Automated delineation of the corneal interfaces in
anterior segment optical coherence tomography (AS-OCT) can be challenging for
corneas that are irregularly shaped due to pathology, or as a consequence of
surgery, leading to incorrect thickness measurements. In this research, deep
learning is used to automatically delineate the corneal interfaces and measure
corneal thickness with high accuracy in post-DMEK AS-OCT B-scans. Three
different deep learning strategies were developed based on 960 B-scans from 50
patients. On an independent test set of 320 B-scans, corneal thickness could be
measured with an error of 13.98 to 15.50 micrometers for the central 9 mm range,
which is less than 3% of the average corneal thickness. The accurate thickness
measurements were used to construct detailed pachymetry maps. Moreover,
follow-up scans could be registered based on anatomical landmarks to obtain
differential pachymetry maps. These maps may enable a more comprehensive
understanding of the restoration of the endothelial function after DMEK, where
thickness often varies throughout different regions of the cornea, and
subsequently contribute to a standardized postoperative regime. Comment: Fixed typo in abstract: the development set consists of 960 B-scans
from 50 patients (instead of 68). The B-scans from the other 18 patients were
used for testing only.
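Once the two corneal interfaces are delineated, a thickness profile follows directly from their separation. The sketch below uses the vertical distance as a simplification (true pachymetry is measured perpendicular to the anterior surface), and the names and pixel spacing are hypothetical.

```python
import numpy as np

def thickness_profile(anterior, posterior, um_per_pixel):
    """Per-column corneal thickness from two delineated interfaces.

    `anterior` and `posterior` give the axial position (in pixels) of the
    two corneal surfaces for each A-scan in a B-scan; their difference,
    scaled by the axial pixel spacing, is a thickness profile. Profiles
    from registered B-scans can then be assembled into a pachymetry map.
    """
    anterior = np.asarray(anterior, dtype=float)
    posterior = np.asarray(posterior, dtype=float)
    return (posterior - anterior) * um_per_pixel
```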